
    Parathyroid hormone is a plausible mediator for the metabolic syndrome in the morbidly obese: a cross-sectional study

    Background: The biological mechanisms in the association between the metabolic syndrome (MS) and various biomarkers, such as 25-hydroxyvitamin D (vit D) and magnesium, are not fully understood. Several of the proposed predictors of MS are also possible predictors of parathyroid hormone (PTH). We aimed to explore whether PTH is a possible mediator between MS and various possible explanatory variables in morbidly obese patients. Methods: Fasting serum levels of PTH, vit D, and magnesium were assessed in a cross-sectional study of 1,017 consecutive morbidly obese patients (68% women). Dependencies between MS and a total of seven possible explanatory variables suggested in the literature, including PTH, vit D, and magnesium, were specified in a path diagram, including both direct and indirect effects. Possible gender differences were also included. Effects were estimated using Bayesian path analysis, a multivariable regression technique, and expressed as standardized regression coefficients. Results: Sixty-eight percent of the patients had MS. In addition to type 2 diabetes and age, both PTH and serum phosphate had significant direct effects on MS: 0.36 (95% credibility interval (CrI) [0.15, 0.57]) and 0.28 (95% CrI [0.10, 0.47]), respectively. However, due to significant gender differences, an increase in either PTH or phosphate corresponded to an increased odds ratio for MS in women only. All proposed predictors of MS had significant direct effects on PTH, with vit D and phosphate the strongest: -0.27 (95% CrI [-0.33, -0.21]) and -0.26 (95% CrI [-0.32, -0.20]), respectively. Although neither vit D nor magnesium had a significant direct effect on MS, in women both affected MS indirectly, owing to the strong direct effect of PTH on MS. For phosphate, the indirect effect on MS, mediated through serum calcium and PTH, had the opposite sign to the direct effect, so the total effect on MS was somewhat attenuated compared with the direct effect alone. Conclusion: Our results indicate that, for women, PTH is a plausible mediator in the association between MS and a range of explanatory variables, including vit D, magnesium, and phosphate.

    Net benefit approaches to the evaluation of prediction models, molecular markers, and diagnostic tests

    Many decisions in medicine involve trade-offs, such as between diagnosing patients with disease versus unnecessary additional testing for those who are healthy. Net benefit is an increasingly reported decision analytic measure that puts benefits and harms on the same scale. This is achieved by specifying an exchange rate, a clinical judgment of the relative value of benefits (such as detecting a cancer) and harms (such as unnecessary biopsy) associated with models, markers, and tests. The exchange rate can be derived by asking simple questions, such as the maximum number of patients a doctor would recommend for biopsy to find one cancer. As the answers to these sorts of questions are subjective, it is possible to plot net benefit for a range of reasonable exchange rates in a "decision curve." For clinical prediction models, the exchange rate is related to the probability threshold that determines whether a patient is classified as positive or negative for a disease. Net benefit is useful for determining whether basing clinical decisions on a model, marker, or test would do more good than harm. This is in contrast to traditional measures such as sensitivity, specificity, or area under the curve, which are statistical abstractions not directly informative about clinical value. Recent years have seen an increase in practical applications of net benefit analysis to research data. This is a welcome development, since decision analytic techniques are of particular value when the purpose of a model, marker, or test is to help doctors make better clinical decisions.
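The net benefit calculation described in this abstract has a standard closed form: true positives minus false positives, with false positives weighted by the exchange rate pt/(1 − pt) derived from the threshold probability pt. A minimal sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def net_benefit(y_true, y_prob, threshold):
    """Net benefit = TP/n - FP/n * pt/(1 - pt), with pt the threshold probability."""
    y_true = np.asarray(y_true)
    pred_pos = np.asarray(y_prob) >= threshold
    n = len(y_true)
    tp = np.sum(pred_pos & (y_true == 1))
    fp = np.sum(pred_pos & (y_true == 0))
    return tp / n - fp / n * threshold / (1 - threshold)

def net_benefit_treat_all(y_true, threshold):
    """Reference strategy that classifies every patient as positive."""
    prev = np.mean(y_true)
    return prev - (1 - prev) * threshold / (1 - threshold)
```

A decision curve then plots `net_benefit` over a range of reasonable thresholds, alongside the "treat all" curve and the "treat none" line (net benefit = 0).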

    Impact of predictor measurement heterogeneity across settings on performance of prediction models: a measurement error perspective

    It is widely acknowledged that the predictive performance of clinical prediction models should be studied in patients who were not part of the data in which the model was derived. Out-of-sample performance can be hampered when predictors are measured differently at derivation and external validation. This may occur, for instance, when predictors are measured using different measurement protocols or when tests are produced by different manufacturers. Although such heterogeneity in predictor measurement between derivation and validation data is common, its impact on out-of-sample performance is not well studied. Using analytical and simulation approaches, we examined out-of-sample performance of prediction models under various scenarios of heterogeneous predictor measurement. These scenarios were defined and clarified using an established taxonomy of measurement error models. The results of our simulations indicate that predictor measurement heterogeneity can induce miscalibration of predictions and affects discrimination and overall predictive accuracy, to extents that the prediction model may no longer be considered clinically useful. The measurement error taxonomy was found to be helpful in identifying and predicting effects of heterogeneous predictor measurements between settings of prediction model derivation and validation. Our work indicates that homogeneity of measurement strategies across settings is of paramount importance in prediction research.
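The core phenomenon the abstract describes can be reproduced in a toy simulation: derive a logistic model on a cleanly measured predictor, then validate it where the same predictor carries classical measurement error, and watch the calibration slope fall below 1. This is an illustrative sketch (plain NumPy, not the authors' simulation code; all parameter values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logistic(x, y, iters=25):
    """Fit intercept + slope by Newton-Raphson (plain NumPy, no sklearn)."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.zeros(2)
    for _ in range(iters):
        mu = 1 / (1 + np.exp(-(X @ b)))
        w = mu * (1 - mu)
        b = b + np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (y - mu))
    return b

n = 20_000

# Derivation data: predictor measured without additional error
x = rng.normal(size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-(-1 + x))))
b = fit_logistic(x, y)

# Validation data: same true model, but the predictor is observed with
# classical measurement error (a different protocol or assay, say)
x_val = rng.normal(size=n)
y_val = rng.binomial(1, 1 / (1 + np.exp(-(-1 + x_val))))
x_obs = x_val + rng.normal(scale=1.0, size=n)

# Calibration slope at validation: regress the outcome on the linear predictor;
# a slope well below 1 signals miscalibration induced by the measurement shift
lp = b[0] + b[1] * x_obs
slope = fit_logistic(lp, y_val)[1]
```

With error variance equal to the predictor variance, the calibration slope lands around one half, which is the kind of attenuation the taxonomy of measurement error models predicts for the "classical error at validation only" scenario.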

    Three myths about risk thresholds for prediction models

    Acknowledgments This work was developed as part of the international initiative of strengthening analytical thinking for observational studies (STRATOS). The objective of STRATOS is to provide accessible and accurate guidance in the design and analysis of observational studies (http://stratos-initiative.org/). Members of the STRATOS Topic Group ‘Evaluating diagnostic tests and prediction models’ are Gary Collins, Carl Moons, Ewout Steyerberg, Patrick Bossuyt, Petra Macaskill, David McLernon, Ben van Calster, and Andrew Vickers. Funding The study is supported by the Research Foundation-Flanders (FWO) project G0B4716N and Internal Funds KU Leuven (project C24/15/037). Laure Wynants is a post-doctoral fellow of the Research Foundation – Flanders (FWO). The funding bodies had no role in the design of the study, collection, analysis, interpretation of data, nor in writing the manuscript. Contributions LW and BVC conceived the original idea of the manuscript, to which ES, MVS and DML then contributed. DT acquired the data. LW analyzed the data, interpreted the results and wrote the first draft. All authors revised the work, approved the submitted version, and are accountable for the integrity and accuracy of the work.

    Machine learning algorithms performed no better than regression models for prognostication in traumatic brain injury

    Objective: We aimed to explore the added value of common machine learning (ML) algorithms for prediction of outcome after moderate and severe traumatic brain injury. Study Design and Setting: We performed logistic regression (LR), lasso regression, and ridge regression with key baseline predictors in the IMPACT-II database (15 studies, n = 11,022). ML algorithms included support vector machines, random forests, gradient boosting machines, and artificial neural networks, and were trained using the same predictors. To assess generalizability of predictions, we performed internal, internal-external, and external validation on the recent CENTER-TBI study (patients with Glasgow Coma Scale …). Results: In the IMPACT-II database, 3,332/11,022 (30%) died and 5,233 (48%) had unfavorable outcome (Glasgow Outcome Scale less than 4). In the CENTER-TBI study, 348/1,554 (29%) died and 651 (54%) had unfavorable outcome. Discrimination and calibration varied widely between the studies and less so between the studied algorithms. The mean area under the curve was 0.82 for mortality and 0.77 for unfavorable outcomes in the CENTER-TBI study. Conclusion: ML algorithms may not outperform traditional regression approaches in a low-dimensional setting for outcome prediction after moderate or severe traumatic brain injury. Like regression-based prediction models, ML algorithms should be rigorously validated to ensure applicability to new populations. © 2020 The Authors. Published by Elsevier Inc.
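The internal-external validation scheme used in this study (leave one study out, train on the rest, validate on the held-out study) is algorithm-agnostic and can be sketched generically; `fit` and `predict` below stand in for any of the compared learners, and the rank-based AUC is a standard formula, not the authors' code:

```python
import numpy as np

def auc(y, p):
    """Rank-based AUC (no tie correction): probability that a random case
    receives a higher predicted risk than a random control."""
    order = np.argsort(p)
    ranks = np.empty(len(p))
    ranks[order] = np.arange(1, len(p) + 1)
    n1 = int(np.sum(y))
    n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

def internal_external_cv(fit, predict, X, y, study_ids):
    """Leave-one-study-out: each study serves once as the validation set."""
    aucs = {}
    for s in np.unique(study_ids):
        train = study_ids != s
        model = fit(X[train], y[train])
        aucs[s] = auc(y[~train], predict(model, X[~train]))
    return aucs
```

Spread in the per-study AUCs then reflects between-study heterogeneity, which the abstract reports as larger than the differences between algorithms.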

    Evaluation of clinical prediction models (part 3): calculating the sample size required for an external validation study

    An external validation study evaluates the performance of a prediction model in new data, but many of these studies are too small to provide reliable answers. In the third article of their series on model evaluation, Riley and colleagues describe how to calculate the sample size required for external validation studies, and propose to avoid rules of thumb by tailoring calculations to the model and setting at hand.
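To give a flavor of such a tailored calculation: one criterion of this kind targets the precision of the observed/expected (O/E) ratio. A minimal sketch, assuming the standard-error approximation SE(ln O/E) ≈ sqrt((1 − φ)/(n·φ)), where φ is the anticipated outcome proportion in the validation setting (the target SE is a choice, not a rule of thumb):

```python
import math

def n_for_oe_precision(phi, target_se):
    """Smallest n such that SE(ln O/E) ~= sqrt((1 - phi) / (n * phi))
    does not exceed target_se; phi is the anticipated outcome proportion."""
    return math.ceil((1 - phi) / (phi * target_se ** 2))

# e.g. a 30% outcome proportion and a target SE of 0.10 on the log scale
# (95% CI for O/E of roughly a factor exp(1.96 * 0.10) around the estimate)
n = n_for_oe_precision(0.30, 0.10)
```

Analogous calculations exist for the calibration slope and the C statistic; the required sample size is then the largest across criteria.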

    Does ignoring clustering in multicenter data influence the performance of prediction models? A simulation study

    Clinical risk prediction models are increasingly being developed and validated on multicenter datasets. In this article, we present a comprehensive framework for the evaluation of the predictive performance of prediction models at the center level and the population level, considering population-averaged predictions, center-specific predictions, and predictions assuming an average random center effect. We demonstrated in a simulation study that calibration slopes deviate from one not only because of over- or underfitting of patterns in the development dataset, but also as a result of the choice of the model (standard versus mixed effects logistic regression), the type of predictions (marginal versus conditional versus assuming an average random effect), and the level of model validation (center versus population). In particular, when data are heavily clustered (ICC 20%), center-specific predictions offer the best predictive performance at the population level and the center level. We recommend that models should reflect the data structure, while the level of model validation should reflect the research question.
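The distinction between the prediction types compared in this abstract can be made concrete with a random-intercept logistic model. A sketch assuming a latent-scale ICC of 20%, i.e. τ²/(τ² + π²/3) ≈ 0.20 (values illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random-intercept logistic model: logit(p_ij) = lp_ij + u_j, u_j ~ N(0, tau2)
tau2 = 0.822                       # latent-scale ICC = tau2 / (tau2 + pi^2/3)
icc = tau2 / (tau2 + np.pi ** 2 / 3)

def p_conditional(lp):
    """Prediction assuming an average random center effect (u_j = 0)."""
    return 1 / (1 + np.exp(-lp))

def p_marginal(lp, tau2, draws=200_000):
    """Population-averaged prediction: integrate over the center effects
    by Monte Carlo."""
    u = rng.normal(scale=np.sqrt(tau2), size=draws)
    return float(np.mean(1 / (1 + np.exp(-(lp + u)))))
```

For the same linear predictor, marginal probabilities sit closer to 0.5 than conditional ones, so validating conditional predictions against population-level outcomes (or vice versa) shifts the calibration slope even in the absence of overfitting.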

    Critical appraisal of artificial intelligence-based prediction models for cardiovascular disease

    The medical field has seen a rapid increase in the development of artificial intelligence (AI)-based prediction models. With the introduction of such AI-based prediction model tools and software in cardiovascular patient care, cardiovascular researchers and healthcare professionals are challenged to understand the opportunities as well as the limitations of AI-based predictions. In this article, we present 12 critical questions for cardiovascular health professionals to ask when confronted with an AI-based prediction model. We aim to support medical professionals in distinguishing the AI-based prediction models that can add value to patient care from those that do not.